Existing wireless sensor network (WSN) anomaly detection methods consider and analyze only temporal features. To address this limitation, this paper designs a self-supervised, autoencoder-based anomalous node detection method. The method integrates temporal WSN data-flow feature extraction, spatial position feature extraction, and inter-modal WSN correlation feature extraction into the design of the autoencoder, thereby making full use of the spatial and temporal information in the WSN for anomaly detection. First, a fully connected network extracts the temporal features of each node, considering a single mode from a local spatial perspective. Second, a graph neural network (GNN) introduces the WSN topology from a global spatial perspective and extracts the spatial and temporal features of the data flows of each node and its neighbors, again considering a single mode. Then, an adaptive fusion method based on weighted summation extracts the correlation features between the different modes. In addition, this paper introduces a gated recurrent unit (GRU) to address the long-term dependence problem along the time dimension. Finally, the reconstructed output of the decoder and the hidden-layer representation of the autoencoder are fed into a fully connected network to compute the anomaly probability of the current system. Because the spatial feature extraction is performed early in the pipeline, the designed method can be applied to large-scale network anomaly detection by adding a clustering operation. Experiments show that the designed method outperforms the baselines, reaching an F1 score of 90.6%, which is 5.2% higher than that of existing anomaly detection methods based on unsupervised reconstruction and prediction. Code and models are available at https://github.com/GuetYe/anomaly_detection/GLSL
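To make the described pipeline concrete, the following is a minimal PyTorch-style sketch of how the pieces could fit together. It is not the authors' released code: the layer sizes, the one-hop mean aggregation standing in for the GNN, and the softmax-weighted fusion are illustrative assumptions.

```python
import torch
import torch.nn as nn

class WSNAnomalyDetector(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.temporal_fc = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.spatial_fc = nn.Linear(hid_dim, hid_dim)            # applied after neighbor aggregation
        self.fusion_w = nn.Parameter(torch.zeros(2))             # adaptive weights for the two streams
        self.gru = nn.GRU(hid_dim, hid_dim, batch_first=True)    # long-term temporal dependence
        self.decoder = nn.Linear(hid_dim, in_dim)                 # reconstruction
        self.score_head = nn.Sequential(nn.Linear(in_dim + hid_dim, 1), nn.Sigmoid())

    def forward(self, x, adj):
        # x: (T, N, in_dim) readings of N nodes over T steps; adj: (N, N) row-normalized topology.
        h_t = self.temporal_fc(x)                      # local, single-node temporal features
        h_s = torch.relu(self.spatial_fc(adj @ h_t))   # one-hop GNN-style neighbor aggregation
        w = torch.softmax(self.fusion_w, dim=0)
        h = w[0] * h_t + w[1] * h_s                    # adaptive weighted-sum fusion
        z, _ = self.gru(h.transpose(0, 1).contiguous())  # (N, T, hid_dim)
        recon = self.decoder(z)                        # reconstructed data flow
        score = self.score_head(torch.cat([recon, z], dim=-1))
        return recon, score.squeeze(-1)                # per-node, per-step anomaly probability

x = torch.randn(20, 8, 4)             # 20 time steps, 8 nodes, 4 features (toy data)
adj = torch.full((8, 8), 1.0 / 8)     # toy row-normalized adjacency
recon, score = WSNAnomalyDetector(4, 16)(x, adj)
```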
Persuasion modeling is a key building block for conversational agents. Existing works in this direction are limited to analyzing textual dialogue corpora. We argue that visual signals also play an important role in understanding human persuasive behaviors. In this paper, we introduce the first multimodal dataset for modeling persuasion behaviors. Our dataset includes 199 dialogue transcriptions and videos captured in a multi-player social deduction game setting, 26,647 utterance-level annotations of persuasion strategy, and game-level annotations of deduction game outcomes. We provide extensive experiments to show how dialogue context and visual signals benefit persuasion strategy prediction. We also explore the generalization ability of language models for persuasion modeling and the role of persuasion strategies in predicting social deduction game outcomes. Our dataset, code, and models can be found at https://persuasion-deductiongame.socialai-data.org.
Physics-Informed Neural Networks (PINNs) have recently been proposed to solve scientific and engineering problems, where physical laws are introduced into neural networks as prior knowledge. With the embedded physical laws, PINNs enable the estimation of critical parameters, which are unobservable via physical tools, through observable variables. For example, Power Electronic Converters (PECs) are essential building blocks for the green energy transition. PINNs have been applied to estimate the capacitance, which is unobservable during PEC operations, using current and voltage, which can be observed easily during operations. The estimated capacitance facilitates self-diagnostics of PECs. Existing PINNs are often manually designed, which is time-consuming and may lead to suboptimal performance due to the large number of design choices for neural network architectures and hyperparameters. In addition, PINNs are often deployed on different physical devices, e.g., PECs, with limited and varying resources. Therefore, different PINN models must be designed under different resource constraints, which makes manual design even more challenging. To contend with these challenges, we propose Automated Physics-Informed Neural Networks (AutoPINN), a framework that enables the automated design of PINNs by combining AutoML and PINNs. Specifically, we first tailor a search space that allows finding high-accuracy PINNs for PEC internal parameter estimation. We then propose a resource-aware search strategy to explore the search space to find the best PINN model under different resource constraints. We experimentally demonstrate that AutoPINN is able to find more accurate PINN models than human-designed, state-of-the-art PINN models using fewer resources.
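As a point of reference for the PEC setting described above, here is a minimal PINN sketch (not AutoPINN itself): a small network fits the observed voltage, the measured current supplies a physics residual through the capacitor law i = C dv/dt, and the unobservable capacitance C is a trainable parameter recovered jointly with the fit. The simplified circuit law, toy signals, and all sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))   # v_hat(t)
log_C = nn.Parameter(torch.zeros(1))                                  # unobservable parameter (log-scale)
opt = torch.optim.Adam(list(net.parameters()) + [log_C], lr=1e-2)

t = torch.linspace(0, 1, 200).unsqueeze(1)
v_obs = torch.sin(2 * torch.pi * t)                                   # toy observed voltage
i_obs = 0.1 * 2 * torch.pi * torch.cos(2 * torch.pi * t)              # toy observed current (true C = 0.1)

for _ in range(3000):
    t_req = t.clone().requires_grad_(True)
    v_hat = net(t_req)
    dv_dt = torch.autograd.grad(v_hat.sum(), t_req, create_graph=True)[0]
    data_loss = ((v_hat - v_obs) ** 2).mean()                  # fit the observable voltage
    phys_loss = ((log_C.exp() * dv_dt - i_obs) ** 2).mean()    # capacitor law as prior knowledge
    loss = data_loss + phys_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

print("estimated capacitance:", log_C.exp().item())
```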
How do we know when the predictions made by a classifier can be trusted? This is a fundamental problem that also has immense practical applicability, especially in safety-critical areas such as medicine and autonomous driving. The de facto approach of using the classifier's softmax outputs as a proxy for trustworthiness suffers from the over-confidence issue, while the most recent works incur problems such as additional retraining cost and an accuracy-versus-trustworthiness trade-off. In this work, we argue that the trustworthiness of a classifier's prediction for a sample is highly associated with two factors: the sample's neighborhood information and the classifier's output. To combine the best of both worlds, we design NeighborAgg, a model-agnostic post-hoc approach that leverages these two essential sources of information via adaptive neighborhood aggregation. Theoretically, we show that NeighborAgg is a generalized version of a one-hop graph convolutional network, inheriting the powerful modeling ability to capture the varying similarity between samples within each class. We also extend our approach to the closely related task of mislabel detection and provide a theoretical coverage guarantee to bound the false negative rate. Empirically, extensive experiments on image and tabular benchmarks verify our theory and suggest that NeighborAgg outperforms other methods, achieving state-of-the-art trustworthiness performance.
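The sketch below illustrates the core idea in its simplest form and is not the released NeighborAgg implementation: each sample's softmax vector is combined with a class histogram of its k nearest neighbors in feature space, and a small learned aggregator maps the pair to a trust score. The cosine k-NN, the linear aggregator, and k are illustrative choices.

```python
import torch
import torch.nn as nn

def neighbor_histogram(feats, labels, num_classes, k=10):
    # Cosine-similarity k-NN over a feature bank, excluding the sample itself.
    normed = nn.functional.normalize(feats, dim=1)
    sim = normed @ normed.T
    sim.fill_diagonal_(-1.0)
    idx = sim.topk(k, dim=1).indices                        # (N, k) neighbor indices
    votes = nn.functional.one_hot(labels[idx], num_classes).float()
    return votes.mean(dim=1)                                # (N, C) neighborhood class distribution

class NeighborAggScorer(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.mix = nn.Linear(2 * num_classes, 1)            # adaptive aggregation of the two signals

    def forward(self, softmax_out, neigh_hist):
        return torch.sigmoid(self.mix(torch.cat([softmax_out, neigh_hist], dim=1))).squeeze(1)

feats = torch.randn(100, 16)
labels = torch.randint(0, 3, (100,))
probs = torch.softmax(torch.randn(100, 3), dim=1)
trust = NeighborAggScorer(3)(probs, neighbor_histogram(feats, labels, 3))   # per-sample trust scores
```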
Sensors in cyber-physical systems often capture interconnected processes and thus emit correlated time series (CTS), the forecasting of which enables important applications. The key to successful CTS forecasting is to uncover the temporal dynamics of time series and the spatial correlations among time series. Deep learning-based solutions exhibit impressive performance at discerning these aspects. In particular, automated CTS forecasting, where the design of an optimal deep learning architecture is automated, enables forecasting accuracy that surpasses what has been achieved by manual approaches. However, automated CTS solutions remain in their infancy: they are only able to find optimal architectures for predefined hyperparameters, and they scale poorly to large-scale CTS. To overcome these limitations, we propose SEARCH, a joint, scalable framework to automatically devise effective CTS forecasting models. Specifically, we encode each candidate architecture and its accompanying hyperparameters into a joint graph representation. We introduce an efficient Architecture-Hyperparameter Comparator (AHC) to rank all architecture-hyperparameter pairs, and we then further evaluate the top-ranked pairs to select a final result. Extensive experiments on six benchmark datasets demonstrate that SEARCH not only eliminates manual effort but also achieves better performance than manually designed and existing automatically designed CTS models. In addition, it shows excellent scalability to large CTS.
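A hypothetical sketch of the comparator idea is given below; it is not the paper's implementation. Each (architecture, hyperparameter) candidate is assumed to already be embedded into one vector from its joint graph, and a small network predicts which of two candidates forecasts better, so the whole candidate set can be ranked pairwise without training every model.

```python
import torch
import torch.nn as nn

class Comparator(nn.Module):
    def __init__(self, emb_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * emb_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, za, zb):
        # Probability that candidate a outperforms candidate b.
        return torch.sigmoid(self.net(torch.cat([za, zb], dim=-1))).squeeze(-1)

def rank_candidates(embs, comparator):
    # Score each candidate by its average pairwise win probability, then sort.
    n = embs.size(0)
    wins = torch.zeros(n)
    for i in range(n):
        others = torch.cat([embs[:i], embs[i + 1:]])
        wins[i] = comparator(embs[i].expand_as(others), others).mean()
    return wins.argsort(descending=True)

embs = torch.randn(20, 32)                    # assumed graph-encoder outputs, one per candidate
order = rank_candidates(embs, Comparator())   # top-ranked pairs would then be fully evaluated
```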
Adapter Tuning, which freezes the pretrained language models (PLMs) and only fine-tunes a few extra modules, has become an appealing, efficient alternative to full model fine-tuning. Although computationally efficient, recent Adapters often increase the number of parameters (e.g., the bottleneck dimension) to match the performance of full model fine-tuning, which we argue goes against their original intention. In this work, we re-examine the parameter-efficiency of Adapters through the lens of network pruning (we name such a plug-in concept \texttt{SparseAdapter}) and find that SparseAdapter can achieve comparable or better performance than standard Adapters when the sparse ratio reaches up to 80\%. Based on our findings, we introduce an easy but effective setting, ``\textit{Large-Sparse}'', to improve the model capacity of Adapters under the same parameter budget. Experiments with five competitive Adapters on three advanced PLMs show that with a proper sparsification method (e.g., SNIP) and sparse ratio (e.g., 40\%), SparseAdapter consistently outperforms its corresponding counterparts. Encouragingly, with the \textit{Large-Sparse} setting, we obtain further appealing gains, even outperforming full fine-tuning by a large margin. Our code will be released at: https://github.com/Shwai-He/SparseAdapter.
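A hedged sketch of the plug-in idea follows (not the repository code): adapter weights are scored with a SNIP-style saliency |w * dL/dw| on one batch, and the lowest-scoring fraction is zeroed out so that only the surviving adapter parameters are trained. The toy bottleneck adapter, the dummy loss, and the 40% ratio are illustrative assumptions.

```python
import torch
import torch.nn as nn

def snip_prune(adapter, loss_fn, batch, sparse_ratio=0.4):
    loss = loss_fn(adapter, batch)
    loss.backward()
    masks = {}
    for name, p in adapter.named_parameters():
        saliency = (p * p.grad).abs().flatten()               # SNIP-style saliency |w * grad|
        k = max(int(sparse_ratio * saliency.numel()), 1)
        threshold = saliency.kthvalue(k).values
        masks[name] = (saliency > threshold).float().view_as(p)
        p.data *= masks[name]                                  # prune; reapply the mask after each update
        p.grad = None
    return masks

# Toy bottleneck adapter and a dummy reconstruction loss, for illustration only.
adapter = nn.Sequential(nn.Linear(64, 8), nn.ReLU(), nn.Linear(8, 64))
loss_fn = lambda m, x: ((m(x) - x) ** 2).mean()
masks = snip_prune(adapter, loss_fn, torch.randn(4, 64), sparse_ratio=0.4)
```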
Contrastive Language-Image Pre-training (CLIP) has been shown to learn visual representations with excellent transferability, achieving promising accuracy for zero-shot classification. To further improve its downstream performance, existing works propose additional learnable modules on top of CLIP and fine-tune them on few-shot training sets. However, the resulting extra training cost and data requirements severely hinder the efficiency of model deployment and knowledge transfer. In this paper, we introduce a free-lunch enhancement method, CALIP, to boost CLIP's zero-shot performance via a parameter-free attention module. Specifically, we guide the visual and textual representations to interact with each other and explore cross-modal informative features via attention. Because pre-training has largely reduced the embedding distance between the two modalities, we discard all learnable parameters in the attention and update the multi-modal features bidirectionally, making the whole process parameter-free and training-free. In this way, the images are blended with textual-aware signals and the text representations become visually guided for better adaptive zero-shot alignment. We evaluate CALIP on various benchmarks spanning 14 datasets for few-shot classification of both 2D images and 3D point clouds, showing consistent zero-shot performance improvements over CLIP. On top of this, we further insert a small number of linear layers into CALIP's attention module and verify its robustness under the few-shot setting, which also achieves leading performance compared with existing methods. These extensive experiments demonstrate the superiority of our approach for the efficient enhancement of CLIP.
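A minimal sketch of the parameter-free cross-modal attention described above is given below, under assumed shapes; it is not the official CALIP code. Pre-extracted CLIP visual and textual features attend to each other through plain softmax attention with no learnable weights, and the updated features are blended back before the usual zero-shot cosine-similarity classification. The scale and blending coefficients are illustrative assumptions.

```python
import torch

def parameter_free_attention(f_v, f_t, scale=2.0, beta=0.5):
    # f_v: (M, D) visual (e.g. spatial) features; f_t: (C, D) textual class features.
    f_v = torch.nn.functional.normalize(f_v, dim=-1)
    f_t = torch.nn.functional.normalize(f_t, dim=-1)
    attn = f_v @ f_t.T                                        # cross-modal affinities, no parameters
    f_v_new = torch.softmax(scale * attn, dim=-1) @ f_t       # text-guided visual update
    f_t_new = torch.softmax(scale * attn.T, dim=-1) @ f_v     # visually guided text update
    return f_v + beta * f_v_new, f_t + beta * f_t_new         # bidirectional blending

f_v, f_t = torch.randn(49, 512), torch.randn(10, 512)          # toy CLIP-like features
f_v2, f_t2 = parameter_free_attention(f_v, f_t)
logits = torch.nn.functional.normalize(f_v2.mean(0, keepdim=True), dim=-1) @ \
         torch.nn.functional.normalize(f_t2, dim=-1).T         # zero-shot class logits
```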
We study the off-policy evaluation (OPE) problem for partially observable Markov decision processes (POMDPs) with continuous states. Motivated by the recently proposed proximal causal inference framework, we develop a non-parametric identification result for estimating the policy value via so-called V-bridge functions with the help of time-dependent proxy variables. We then develop a fitted-Q-evaluation-type algorithm that estimates the V-bridge functions recursively, where each step solves a non-parametric instrumental variable (NPIV) problem. By analyzing this challenging sequential NPIV problem, we establish finite-sample error bounds for estimating the V-bridge functions and, consequently, for evaluating the policy value, in terms of the sample size, the horizon, and a so-called (local) measure of ill-posedness at each step. To the best of our knowledge, this is the first finite-sample error bound for OPE in POMDPs under non-parametric models.
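For reference, each backward step of the recursion above solves an NPIV problem. The generic conditional-moment form of such a problem is sketched below; this is the standard textbook statement, not the paper's exact V-bridge equations.

```latex
% Generic NPIV problem: from observations of (Y, X, Z), recover an unknown
% function h satisfying the conditional-moment restriction
\mathbb{E}\left[\, Y - h(X) \;\middle|\; Z \,\right] = 0 ,
% where Z plays the role of the instrument. In the recursion described above,
% the previously estimated V-bridge function enters the outcome side at each
% step and the time-dependent proxies act as instruments; the (local) measure
% of ill-posedness of this conditional expectation drives the error bounds.
```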
In Markov decision processes (MDPs), unobservable confounders may exist and affect the data-generating process, so classical off-policy evaluation (OPE) estimators may fail to identify the true value function of the target policy. In this paper, we study the statistical properties of OPE in confounded MDPs with observable instrumental variables. Specifically, we propose a two-stage estimator based on instrumental variables and establish its statistical properties in confounded MDPs with a linear structure. For the non-asymptotic analysis, we prove an $\mathcal{O}(n^{-1/2})$ error bound, where $n$ is the number of samples. For the asymptotic analysis, we show that the two-stage estimator is asymptotically normal with a typical rate of $n^{1/2}$. To the best of our knowledge, we are the first to establish such statistical results for a two-stage estimator in confounded linear MDPs with instrumental variables.
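To illustrate the "two-stage estimator based on instrumental variables" idea, here is a generic two-stage least-squares (2SLS) sketch in a single-step linear setting; the paper's estimator for confounded linear MDPs is more involved, so treat this purely as an assumption-labeled toy example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
u = rng.normal(size=n)                               # unobserved confounder
z = rng.normal(size=n)                               # observed instrument: affects a, independent of u
a = 0.8 * z + 0.6 * u + rng.normal(size=n)           # confounded regressor ("action")
y = 1.5 * a + 2.0 * u + rng.normal(size=n)           # outcome; true effect of a is 1.5

# Stage 1: regress the confounded regressor on the instrument.
a_hat = z * (z @ a) / (z @ z)
# Stage 2: regress the outcome on the fitted values from stage 1.
theta_2sls = (a_hat @ y) / (a_hat @ a_hat)
theta_ols = (a @ y) / (a @ a)                        # biased by the confounder
print(f"OLS: {theta_ols:.2f}, 2SLS: {theta_2sls:.2f} (true 1.5)")
```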
Existing top-performing 3D object detectors typically rely on multi-modal fusion strategies. However, this design is fundamentally restricted because it ignores modality-specific useful information, which ultimately hinders model performance. To address this limitation, in this work we introduce a novel modality interaction strategy in which individual single-modality representations are learned and maintained throughout the process, so that their unique characteristics can be exploited during object detection. To realize the proposed strategy, we design a deep interaction architecture characterized by a multi-modal representational interaction encoder and a multi-modal predictive interaction decoder. Experiments on the large-scale nuScenes dataset show that our proposed method frequently surpasses all prior arts. Crucially, our method ranks first on the highly competitive nuScenes object detection leaderboard.